Large-scale graphs in the real world are usually dynamic rather than static: new nodes, edges, and even classes appear over time, for example in citation networks and R&D collaboration networks. Graph neural networks (GNNs) have become the standard method for numerous tasks on graph-structured data. In this work, we employ a two-step procedure to explore how GNNs can incrementally adapt to new, unseen graph data. First, we analyze the margin between transductive and inductive learning on standard benchmark datasets. After inductive prediction, we add the unlabeled data to the graph and show that the models remain stable. Then, we explore the case of continually adding more and more labeled data, including settings in which class labels are not annotated in all cases. Furthermore, we introduce new classes while the graph evolves and explore methods to automatically detect instances from previously unseen classes. To deal with evolving graphs in a principled way, we propose a lifelong learning framework for graph data along with an evaluation protocol, within which we evaluate representative GNN architectures. We observe that implicit knowledge within the model parameters becomes more important when explicit knowledge, i.e., data from past tasks, is limited. We find that in open-world node classification, surprisingly little data from past tasks is sufficient to reach the performance obtained by remembering data from all past tasks. On the challenging task of unseen-class detection, we find that using a weighted cross-entropy loss is important for stability.
Graph neural networks (GNNs) have become the standard method for numerous tasks on graph-structured data, such as node classification. However, real-world graphs often evolve over time, and even new classes may appear. We model these challenges as an instance of lifelong learning, in which a learner faces a sequence of tasks and may take over knowledge acquired in past tasks. Such knowledge may be stored explicitly as historic data or implicitly within the model parameters. In this work, we systematically analyze the influence of implicit and explicit knowledge. To this end, we present an incremental training method for lifelong learning on graphs and introduce a new measure based on $k$-neighborhood time differences to address variances in the historic data. We apply our training method to five representative GNN architectures and evaluate them on three new lifelong node classification datasets. Our results show that no more than 50% of the GNN's receptive field is necessary to retain at least 95% of the accuracy achieved by training on the complete history of the graph data. Furthermore, our experiments confirm that implicit knowledge becomes more important when less explicit knowledge is available.
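The core of such incremental training can be sketched as follows: at each time step, the learner sees only the current task's data plus a bounded window of explicit historic knowledge. This is a minimal sketch, not the paper's actual method; the task representation and `train_fn` callable are placeholders for an actual GNN training routine.

```python
from collections import deque

def incremental_train(tasks, window_size, train_fn):
    """Train incrementally on a stream of tasks, keeping only a bounded
    window of explicit knowledge (historic data) at each step.

    tasks       -- iterable of datasets, one per time step
    window_size -- how many past tasks to retain alongside the current one
    train_fn    -- callable taking the visible data, returning a model
    """
    history = deque(maxlen=window_size)  # bounded explicit knowledge
    models = []
    for task_data in tasks:
        # The model sees the retained history plus the current task's data.
        visible = [d for past in history for d in past] + list(task_data)
        models.append(train_fn(visible))
        history.append(task_data)
    return models
```

Shrinking `window_size` trades explicit knowledge for reliance on whatever the model parameters carry over implicitly, which is the trade-off the abstract studies.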
Machine learning models are typically evaluated by computing similarity with reference annotations and trained by maximizing similarity with those annotations. Especially in the bio-medical domain, annotations are subjective and suffer from low inter- and intra-rater reliability. Since annotations only reflect one annotating entity's interpretation of the real world, this can lead to sub-optimal predictions even though the model achieves high similarity scores. Here, the theoretical concept of Peak Ground Truth (PGT) is introduced. PGT marks the point beyond which an increase in similarity with the reference annotation stops translating to better Real World Model Performance (RWMP). Additionally, a quantitative technique to approximate PGT by computing inter- and intra-rater reliability is proposed. Finally, three categories of PGT-aware strategies to evaluate and improve model performance are reviewed.
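One way to operationalize this idea is to approximate PGT by the mean pairwise inter-rater similarity: once a model agrees with a single reference more than the raters agree with each other, further gains are unlikely to reflect real-world improvement. The following is a hedged sketch under that assumption, using Dice similarity on binary masks; the function names and the choice of Dice are illustrative, not taken from the paper.

```python
def dice(a, b):
    """Dice similarity between two binary masks given as flat 0/1 lists."""
    inter = sum(x & y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2 * inter / total if total else 1.0

def approximate_pgt(annotations):
    """Approximate Peak Ground Truth as mean pairwise inter-rater Dice.

    annotations -- list of binary masks, one per rater, for the same case.
    """
    pairs = [(a, b) for i, a in enumerate(annotations)
             for b in annotations[i + 1:]]
    return sum(dice(a, b) for a, b in pairs) / len(pairs)

def exceeds_pgt(model_mask, reference, annotations):
    """True once the model's similarity to the reference is above the
    approximate PGT, i.e., further gains may no longer be meaningful."""
    return dice(model_mask, reference) >= approximate_pgt(annotations)
```

Intra-rater reliability could be folded in the same way, by treating repeated annotations from one rater as additional entries in `annotations`.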
Accurate PhotoVoltaic (PV) power generation forecasting is vital for the efficient operation of Smart Grids. The automated design of such accurate forecasting models for individual PV plants includes two challenges: First, information about the PV mounting configuration (i.e. inclination and azimuth angles) is often missing. Second, for new PV plants, the amount of historical data available to train a forecasting model is limited (cold-start problem). We address these two challenges by proposing a new method for day-ahead PV power generation forecasts called AutoPV. AutoPV is a weighted ensemble of forecasting models that represent different PV mounting configurations. This representation is achieved by pre-training each forecasting model on a separate PV plant and by scaling the model's output with the peak power rating of the corresponding PV plant. To tackle the cold-start problem, we initially weight each forecasting model in the ensemble equally. To tackle the problem of missing information about the PV mounting configuration, we use new data that become available during operation to adapt the ensemble weights to minimize the forecasting error. AutoPV is advantageous as the unknown PV mounting configuration is implicitly reflected in the ensemble weights, and only the PV plant's peak power rating is required to re-scale the ensemble's output. AutoPV can also represent PV plants with panels distributed on different roofs with varying alignments, as these mounting configurations can be reflected proportionally in the weighting. Additionally, the required computing memory is decoupled when scaling AutoPV to hundreds of PV plants, which is beneficial in Smart Grids with limited computing capabilities. For a real-world data set with 11 PV plants, the accuracy of AutoPV is comparable to a model trained on two years of data and outperforms an incrementally trained model.
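The ensemble mechanics described above can be sketched in a few lines: start from equal weights, scale the combined output by the target plant's peak power rating, and adapt the weights online to reduce the forecasting error. This is a minimal illustration, not the paper's implementation; the gradient-step weight update and the simplex projection are assumptions standing in for whatever adaptation scheme AutoPV actually uses.

```python
import numpy as np

def autopv_forecast(base_preds, weights, peak_power):
    """Ensemble forecast: weighted sum of the normalized base-model
    outputs, re-scaled by the target plant's peak power rating.

    base_preds -- array of shape (n_models, horizon)
    weights    -- array of shape (n_models,), non-negative, summing to 1
    """
    return peak_power * np.asarray(base_preds, dtype=float).T @ weights

def update_weights(weights, base_preds, actual, peak_power, lr=0.05):
    """One gradient step on the squared forecasting error w.r.t. the
    ensemble weights, then projection back to non-negative weights
    summing to one (a simple stand-in for the paper's adaptation)."""
    preds = np.asarray(base_preds, dtype=float)
    err = peak_power * preds.T @ weights - np.asarray(actual, dtype=float)
    grad = peak_power * preds @ err
    w = np.clip(weights - lr * grad, 0.0, None)
    s = w.sum()
    return w / s if s > 0 else np.full_like(w, 1.0 / len(w))
```

With a cold start of equal weights, repeated calls to `update_weights` shift mass toward the base models whose mounting configuration best matches the new plant.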
Human-technology collaboration relies on verbal and non-verbal communication. Machines must be able to detect and understand the movements of humans to facilitate non-verbal communication. In this article, we introduce ongoing research on human activity recognition in intralogistics, and show how it can be applied in industrial settings. We show how semantic attributes can be used to describe human activities flexibly and how context information improves the performance of classifiers that recognise them automatically. Beyond that, we present a concept based on a cyber-physical twin that can reduce the effort and time necessary to create a training dataset for human activity recognition. In the future, it will be possible to train a classifier solely with realistic simulation data, while maintaining or even increasing the classification performance.
Quantifying the perceptual similarity of two images is a long-standing problem in low-level computer vision. The natural image domain commonly relies on supervised learning, e.g., a pre-trained VGG, to obtain a latent representation. However, due to domain shift, pre-trained models from the natural image domain might not apply to other image domains, such as medical imaging. Notably, in medical imaging, evaluating the perceptual similarity is exclusively performed by specialists trained extensively in diverse medical fields. Thus, medical imaging remains devoid of task-specific, objective perceptual measures. This work answers the question: Is it necessary to rely on supervised learning to obtain an effective representation that could measure perceptual similarity, or is self-supervision sufficient? To understand whether recent contrastive self-supervised representation (CSR) may come to the rescue, we start with natural images and systematically evaluate CSR as a metric across numerous contemporary architectures and tasks and compare them with existing methods. We find that in the natural image domain, CSR behaves on par with the supervised one on several perceptual tests as a metric, and in the medical domain, CSR better quantifies perceptual similarity concerning the experts' ratings. We also demonstrate that CSR can significantly improve image quality in two image synthesis tasks. Finally, our extensive results suggest that perceptuality is an emergent property of CSR, which can be adapted to many image domains without requiring annotations.
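Using a contrastive self-supervised representation as a perceptual metric reduces, at inference time, to comparing two images' latent features, typically via cosine distance. Below is a minimal sketch of that comparison; the encoder producing the feature vectors is deliberately left abstract, since the paper evaluates many architectures rather than prescribing one.

```python
import numpy as np

def perceptual_distance(feat_a, feat_b):
    """Perceptual distance as the cosine distance between latent features.

    feat_a, feat_b -- 1-D feature vectors from some (self-supervised)
    encoder; the encoder itself is a stand-in assumption here, not the
    paper's exact model. 0.0 means perceptually identical under the
    representation; larger values mean more dissimilar.
    """
    a = np.asarray(feat_a, dtype=float)
    b = np.asarray(feat_b, dtype=float)
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - cos
```

A supervised baseline such as a pre-trained VGG would slot into the same interface, which is what makes the supervised-vs-self-supervised comparison in the abstract straightforward to run.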
Semantic segmentation from aerial views is a vital task for autonomous drones as they require precise and accurate segmentation to traverse safely and efficiently. Segmenting images from aerial views is especially challenging as they include diverse view-points, extreme scale variation and high scene complexity. To address this problem, we propose an end-to-end multi-class semantic segmentation diffusion model. We introduce recursive denoising which allows predicted error to propagate through the denoising process. In addition, we combine this with a hierarchical multi-scale approach, complementary to the diffusion process. Our method achieves state-of-the-art results on UAVid and on the Vaihingen building segmentation benchmark.
This paper shows the implementation of reinforcement learning (RL) in commercial flowsheet simulator software (Aspen Plus V12) for designing and optimising a distillation sequence. The aim of the SAC agent was to separate a hydrocarbon mixture into its individual components by utilising distillation, while maximising the profit produced by the distillation sequence. All actions were set by the SAC agent in Python and communicated to Aspen Plus via an API, where the distillation column was simulated using the built-in RADFRAC column. With this, a connection was established for data transfer between Python and Aspen, and the agent succeeded in showing learning behaviour while increasing profit. Although results were generated, the use of Aspen was slow (190 hours) and Aspen was found unsuitable for parallelisation, which makes it a poor fit for solving RL problems. Code and thesis are available at https://github.com/lollcat/Aspen-RL
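The Python-to-simulator loop described above follows the standard agent-environment pattern: propose column settings, query the flowsheet simulator, and receive the resulting profit as reward. The sketch below illustrates only that loop; the quadratic mock profit function replaces the Aspen Plus API call (which is not detailed in the abstract), and a random-search "agent" stands in for SAC.

```python
import random

def mock_flowsheet_profit(reflux_ratio):
    """Stand-in for an Aspen Plus RADFRAC evaluation: maps a column
    setting to a profit. In the real setup this call would go through
    the Aspen API and take seconds to minutes per evaluation."""
    return -(reflux_ratio - 3.0) ** 2 + 10.0  # toy profit, peak at 3.0

def optimise(episodes=200, seed=0):
    """Minimal agent-environment loop: propose an action, query the
    simulator for the profit (reward), keep the best. A SAC agent
    would replace this random search in the actual work."""
    rng = random.Random(seed)
    best_action, best_profit = None, float("-inf")
    for _ in range(episodes):
        action = rng.uniform(1.0, 5.0)          # candidate reflux ratio
        profit = mock_flowsheet_profit(action)  # one "Aspen" evaluation
        if profit > best_profit:
            best_action, best_profit = action, profit
    return best_action, best_profit
```

The loop structure also makes the paper's bottleneck visible: every iteration blocks on one simulator call, which is why a slow, non-parallelisable backend dominates the 190-hour runtime.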
Angluin's L* algorithm learns the minimal (complete) deterministic finite automaton (DFA) of a regular language using membership and equivalence queries. Its probably approximately correct (PAC) version replaces an equivalence query by a sufficiently large set of random membership queries, so that the answer holds with high confidence. Thus it can be applied to any kind of (also non-regular) device, and may be viewed as an algorithm for synthesizing an automaton that abstracts the behavior of the device based on observations. Here, we are interested in how Angluin's PAC learning algorithm behaves for devices obtained from a DFA by introducing some noise. More precisely, we study whether Angluin's algorithm reduces the noise and produces a DFA closer to the original one than the noisy device. We propose several ways to introduce noise: (1) the noisy device inverts the classification of words w.r.t. the DFA with a small probability, (2) the noisy device modifies, with a small probability, the letters of the word before asking for its classification w.r.t. the DFA, and (3) the noisy device combines the classification of a word w.r.t. the DFA with its classification w.r.t. a counter automaton. Our experiments were performed on several hundred DFAs. Bluntly stated, our main contributions consist in showing that: (1) Angluin's algorithm behaves well whenever the noisy device is obtained by a random process, (2) but poorly with structured noise, and (3) almost surely, randomness yields systems with non-recursively enumerable languages.
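Noise model (1), inverting classifications with a small probability, is the simplest of the three to state concretely. The sketch below shows a toy DFA (as a membership predicate) wrapped in such a noisy oracle; it is an illustration of the noise model only, not of the L* learner or the paper's experimental setup.

```python
import random

def make_parity_dfa():
    """Toy DFA accepting binary words with an even number of 1s."""
    def accepts(word):
        return word.count("1") % 2 == 0
    return accepts

def noisy_oracle(dfa, flip_prob, rng):
    """Noise model (1): invert the DFA's classification of each queried
    word independently with probability flip_prob."""
    def classify(word):
        answer = dfa(word)
        return (not answer) if rng.random() < flip_prob else answer
    return classify
```

Models (2) and (3) would wrap the DFA differently: (2) perturbs letters of the queried word before classification, and (3) combines the DFA's verdict with that of a counter automaton.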
Antenna arrays with beamforming overcome the high free-space path loss at higher carrier frequencies. However, the beams must be properly aligned to ensure that the highest power is radiated towards (and received by) the user equipment (UE). While there are methods that search exhaustively for the best beam, as well as approaches using some form of hierarchical search, they can easily return locally optimal solutions with small beam gains. Other methods address this problem by exploiting contextual information, e.g., the position of the UE or information from neighboring base stations (BSs), but the burden of computing and communicating this additional information can be high. Methods based on machine learning have so far suffered from the accompanying training, performance-monitoring, and deployment complexity, which hinders their application at scale. This paper proposes a novel method for solving the initial beam discovery problem that is scalable and easy to tune and implement. Our algorithm is based on a recommender system that associates groups (i.e., UEs) and preferences (i.e., beams from a codebook) based on a training data set. Whenever a new UE needs to be served, our algorithm returns the best beams for this user's cluster. Our simulation results demonstrate the efficiency and robustness of our approach, not only in a single-BS setup but also in setups requiring coordination among several BSs. Our method consistently outperforms standard baseline algorithms on the given task.
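The group-preference association can be illustrated with a minimal lookup-table recommender: aggregate observed beam gains per UE cluster at training time, then serve a new UE by returning its cluster's best-performing codebook beam. This is a hedged sketch of the general idea; the clustering of UEs, the gain measurements, and the function names are assumptions, not the paper's actual recommender.

```python
def train_recommender(samples):
    """Build a per-cluster beam preference table from training data.

    samples -- iterable of (cluster_id, beam_id, gain) triples, where
    cluster_id identifies a UE group and beam_id a codebook beam.
    Returns {cluster_id: best beam by average observed gain}.
    """
    gains = {}
    for cluster, beam, gain in samples:
        total, count = gains.get((cluster, beam), (0.0, 0))
        gains[(cluster, beam)] = (total + gain, count + 1)
    best = {}
    for (cluster, beam), (total, count) in gains.items():
        avg = total / count
        if cluster not in best or avg > best[cluster][1]:
            best[cluster] = (beam, avg)
    return {cluster: beam for cluster, (beam, _) in best.items()}

def recommend_beam(table, cluster):
    """Return the preferred codebook beam for a new UE's cluster."""
    return table[cluster]
```

Because serving a new UE is a single table lookup, the approach stays cheap at inference time, which matches the scalability claim in the abstract.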